PyDigger - unearthing stuff about Python


| Name | Version | Summary | Date |
|------|---------|---------|------|
| llm-agent-protector | 0.1.0 | Polymorphic Prompt Assembler to protect LLM agents from prompt injection and prompt leak | 2025-07-10 23:16:57 |
| agentic_security | 0.4.5 | Agentic LLM vulnerability scanner | 2025-02-15 11:36:15 |
| agentdojo | 0.1.26 | A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents | 2025-02-12 08:29:46 |
| llama-index-packs-zenguard | 0.3.0 | llama-index packs zenguard integration | 2024-11-17 22:43:21 |
| llama-index-packs-llama-guard-moderator | 0.3.0 | llama-index packs llama_guard_moderator integration | 2024-11-17 22:42:41 |
| prompt-protect | 0.1 | An NLP classifier for detecting prompt injection | 2024-09-02 22:57:55 |
| llm-guard | 0.3.15 | LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure. | 2024-08-22 19:39:48 |
| langalf | 0.0.4 | Agentic LLM vulnerability scanner | 2024-04-15 12:40:16 |
| last_layer | 0.1.32 | Ultra-fast, low-latency LLM security solution | 2024-04-05 12:38:46 |